In recent years, artificial intelligence has progressed at an unprecedented pace, fundamentally transforming industries such as healthcare, finance, and transportation. Many of these advances have been driven by deep learning algorithms, often described as black boxes because their decision-making processes are opaque. While these algorithms have proven remarkably effective, their lack of transparency poses significant challenges, particularly concerning trust, accountability, and ethics. This has led to a growing demand for explainable AI (XAI), a field focused on making AI systems more understandable and interpretable to human users.